Distributed Optimization for Non-Strongly Convex Regularizers
Authors
Abstract
We develop primal-dual algorithms for distributed training of linear models in the Spark framework. We present the ProxCoCoA+ method, which generalizes the CoCoA+ algorithm and extends it to the case of general strongly convex regularizers. A primal-dual convergence rate analysis is provided, along with an experimental evaluation of the algorithm on elastic net regularized logistic regression. We also develop the PrimalCoCoA+ method, which allows certain non-strongly convex regularizers to be trained within the ProxCoCoA+ theoretical framework; the algorithm works under the assumption that these regularizers are linearly separable and box constrained. This yields primal-dual convergence rates for L1-regularized models which are, to the best of our knowledge, the first of their kind; we also evaluate the practical efficiency of this method for L1-regularized logistic regression on two real-world datasets. Finally, we experimentally explore and validate ProxCoCoA+ Wild and PrimalCoCoA+ Wild, two new optimization methods that combine distributed and parallel optimization techniques and achieve significant speed-ups over their non-wild variants.
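For orientation, the problem class addressed by these methods can be sketched as follows; the notation (data points a_i, labels y_i, mixing parameter η) is assumed here for illustration and is not taken verbatim from the thesis.

% A minimal sketch of the objective for elastic net regularized logistic regression,
% the running example in the abstract above; symbols are illustrative assumptions.
\min_{w \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} \log\!\left(1 + e^{-y_i\, a_i^{\top} w}\right) \;+\; \lambda\, g(w),
\qquad
g(w) = \eta \lVert w \rVert_1 + \frac{1-\eta}{2} \lVert w \rVert_2^2 .

For η < 1 the elastic net regularizer g is strongly convex and fits the ProxCoCoA+ setting, while η = 1 gives the pure L1 penalty, which is separable but not strongly convex and corresponds to the case targeted by PrimalCoCoA+.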
Similar resources
CoCoA: A General Framework for Communication-Efficient Distributed Optimization
The scale of modern datasets necessitates the development of efficient distributed optimization methods for machine learning. We present a general-purpose framework for the distributed environment, CoCoA, that has an efficient communication scheme and is applicable to a wide variety of problems in machine learning and signal processing. We extend the framework to cover general non-strongly conv...
High-dimensional Inference via Lipschitz Sparsity-Yielding Regularizers
Non-convex regularizers are increasingly applied to high-dimensional inference with sparsity prior knowledge. In general, non-convex regularizers are superior to convex ones for inference, but they suffer from the difficulties brought by local optima and massive computation. A "good" regularizer should perform well in both inference and optimization. In this paper, we prove that some non-convex...
Primal-Dual convex optimization in large deformation diffeomorphic registration with robust regularizers
This paper proposes a method for primal-dual convex optimization in variational Large Deformation Diffeomorphic Metric Mapping (LDDMM) problems formulated with robust regularizers and image similarity metrics. The method is based on the Chambolle and Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorit...
A Comparison of Algorithms for Learning with Nonconvex Regularization
Convex regularizers are popular for sparse and low-rank learning, mainly due to their nice statistical and optimization guarantees. However, they often lead to biased estimation, so the sparsity and accuracy are not as good as desired. This motivates replacing convex regularizers with nonconvex ones, and recently many nonconvex regularizers have been proposed, indeed achieving better performance than...
Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server
This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish linear convergence rate, give explicit expr...
Publication date: 2016